A New Alternating Direction Method for Linear Programming
It is well known that, for a linear program (LP) with constraint matrix $\mathbf{A}\in\mathbb{R}^{m\times n}$, the Alternating Direction Method of Multipliers (ADMM) converges globally and linearly at a rate of $O((\|\mathbf{A}\|_F^2\, mn)\log(1/\epsilon))$. However, this rate depends on the problem dimension, and in practice the algorithm exhibits a slow and fluctuating ``tail convergence''. In this paper, we propose a new variable splitting method for LP and prove that our method has a convergence rate of $O(\|\mathbf{A}\|^2\log(1/\epsilon))$. The proof is based on simultaneously estimating, via certain residuals, the distance from a pair of primal-dual iterates to the optimal primal and dual solution sets. The result is a new first-order LP solver that can exploit both the sparsity and the specific structure of the matrix $\mathbf{A}$, and that achieves a significant speedup over the current fastest LP solvers on important problems such as basis pursuit, inverse covariance matrix estimation, L1 SVM, and nonnegative matrix factorization.
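For context on the baseline the abstract refers to, the following is a minimal sketch of the classical ADMM splitting for a standard-form LP, $\min\ \mathbf{c}^\top\mathbf{x}$ subject to $\mathbf{A}\mathbf{x}=\mathbf{b}$, $\mathbf{x}\ge 0$, using the textbook splitting $\mathbf{x}=\mathbf{z}$ (as in Boyd et al.); the paper's new variable splitting differs and is not reproduced here. The function name `admm_lp` and all parameter choices are illustrative, and $\mathbf{A}$ is assumed to have full row rank.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def admm_lp(c, A, b, rho=1.0, max_iter=5000, tol=1e-8):
    """Classical ADMM for  min c^T x  s.t.  A x = b, x >= 0.

    Splitting: x carries the equality constraint, z carries x >= 0,
    coupled by x = z. This is the standard baseline, not the paper's
    new splitting. Assumes A has full row rank.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable for the constraint x = z

    # KKT matrix of the equality-constrained x-update; factor it once
    # since it does not change across iterations.
    K = np.block([[rho * np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    lu, piv = lu_factor(K)

    for _ in range(max_iter):
        # x-update: argmin_x c^T x + (rho/2)||x - z + u||^2  s.t.  A x = b
        rhs = np.concatenate([rho * (z - u) - c, b])
        x = lu_solve((lu, piv), rhs)[:n]

        # z-update: projection onto the nonnegative orthant
        z_old = z
        z = np.maximum(x + u, 0.0)

        # scaled dual ascent step
        u = u + x - z

        # stop when primal residual ||x - z|| and dual residual are small
        if max(np.linalg.norm(x - z),
               rho * np.linalg.norm(z - z_old)) < tol:
            break
    return z

# Tiny usage example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0
# whose optimum is x = (1, 0).
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(admm_lp(c, A, b))
```

A known weakness of this baseline, and the motivation for the paper, is that its linear rate degrades with the problem dimension, which shows up in practice as the slow, fluctuating tail convergence mentioned above.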
Reviews: A New Alternating Direction Method for Linear Programming
This paper develops a novel alternating direction based method for linear programming problems. The paper presents global convergence results, and a linear rate, for the algorithm. As far as I could see, the mathematics appears to be sound, although I did not check it thoroughly. Numerical experiments are also presented that support the practical benefits of this new approach; the new algorithm is compared with two other algorithms and the results seem favorable. Note that the authors call their algorithm FADMM; my suggestion is that they choose a different acronym because (i) several other ADMM variants are already called FADMM, and (ii) this is an ADMM for LP, so a more descriptive acronym such as LPADMM might be more appropriate.